Search results for "Computational learning theory"

Showing 10 of 10 documents

Adjusted bat algorithm for tuning of support vector machine parameters

2016

Support vector machines are a powerful and widely used supervised learning technique for classification. The quality of the constructed classifier can be improved by appropriate selection of the learning parameters. These parameters are often tuned using grid search with a relatively large step. This optimization can be done more efficiently and more precisely using stochastic search metaheuristics. In this paper we propose an adjusted bat algorithm for support vector machine parameter optimization and show that, compared to grid search, it leads to a better classifier. We tested our approach on a standard set of benchmark data sets from the UCI machine learning repositor…

Keywords: Wake-sleep algorithm; Active learning (machine learning); Computer science; Stability (learning theory); Linear classifier; Semi-supervised learning; Cross-validation; Relevance vector machine; Kernel (linear algebra); Least squares support vector machine; Metaheuristic; Bat algorithm; Structured support vector machine; Supervised learning; Online machine learning; Particle swarm optimization; Pattern recognition; Perceptron; Generalization error; Support vector machine; Kernel method; Computational learning theory; Margin classifier; Hyperparameter optimization; Data mining; Artificial intelligence; Hyper-heuristic
Published in: 2016 IEEE Congress on Evolutionary Computation (CEC)
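The abstract contrasts coarse grid search with stochastic search over SVM parameters. As a minimal sketch of that comparison (the paper's adjusted bat algorithm is not reproduced here; scikit-learn's RandomizedSearchCV stands in as a generic stochastic search over the same (C, gamma) space):

```python
# Hyperparameter tuning for an SVM classifier: coarse grid search vs.
# stochastic (randomized) search. The bat algorithm itself is not shown;
# randomized sampling on a log scale illustrates the general idea of
# searching the (C, gamma) space more finely than a large-step grid.
from scipy.stats import loguniform
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Coarse grid with a "relatively large step", as criticized in the abstract.
grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10, 100],
                            "gamma": [0.001, 0.01, 0.1, 1]}, cv=5)
grid.fit(X, y)

# Stochastic search samples (C, gamma) continuously on a log scale.
rand = RandomizedSearchCV(SVC(),
                          {"C": loguniform(1e-2, 1e3),
                           "gamma": loguniform(1e-4, 1e1)},
                          n_iter=16, cv=5, random_state=0)
rand.fit(X, y)

print("grid best:", grid.best_score_, grid.best_params_)
print("random best:", rand.best_score_)
```

The stochastic search evaluates the same number of candidates but is not confined to grid nodes, which is the efficiency/precision argument the abstract makes.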

Learning from good examples

1995

The usual information in inductive inference for the purpose of learning an unknown recursive function f is the set of all input/output examples (n, f(n)), n ∈ ℕ. In contrast to this approach, we show that it is considerably more powerful to work with finite sets of “good” examples, even when these good examples are required to be effectively computable. The influence of the underlying numberings, with respect to which the learning problem has to be solved, on the capabilities of inference from good examples is also investigated. It turns out that nonstandard numberings can be much more powerful than Gödel numberings.

Keywords: Algebra; Transduction (machine learning); Inductive transfer; Computational learning theory; Inductive bias; Algorithmic learning theory; Unsupervised learning; Multi-task learning; Artificial intelligence; Instance-based learning; Mathematics

Learning formulae from elementary facts

1997

Since the seminal paper by E.M. Gold [Gol67], the computational learning theory community has presumed that the main problem in learning theory at the recursion-theoretic level is to restore a grammar from samples of a language, or a program from its sample computations. However, scientists in physics and biology have become accustomed to looking for interesting assertions rather than for a universal theory explaining everything.

Keywords: Computational learning theory; Grammar; Sample exclusion dimension; Algorithmic learning theory; Mathematics education; Learning theory; Reinforcement learning; Sample (statistics); Inductive reasoning; Mathematics

Organized Learning Models (Pursuer Control Optimisation)

1982

Abstract The concept of Organized Learning is defined, and some random models are presented. For Not Transferable Learning, it is necessary to start from an instantaneous learning rule; in the discrete setting, we form a stochastic model considering the probability of each path; with a continuous approximation, we can study the evolution of the internal state through the relative and absolute probabilities, by means of systems of differential equations. For Transferable Learning, the instantaneous learning rule gives the system evolution directly. Finally, the algorithms for the different models are compared.

Keywords: Computer Science::Machine Learning; Computational learning theory; Wake-sleep algorithm; Active learning (machine learning); Computer science; Competitive learning; Algorithmic learning theory; Stability (learning theory); Online machine learning; Pursuer; Artificial intelligence
Published in: IFAC Proceedings Volumes

Learning Improved Feature Rankings through Decremental Input Pruning for Support Vector Based Drug Activity Prediction

2010

The use of certain machine learning and pattern recognition tools for automated pharmacological drug design has recently been introduced. Different families of learning algorithms, and Support Vector Machines in particular, have been applied to the task of associating observed chemical properties and pharmacological activities with certain kinds of representations of the candidate compounds. The purpose of this work is to select an appropriate feature ordering from a large set of molecular descriptors commonly used in the domain of Drug Activity Characterization. To this end, a new input pruning method is introduced and assessed against commonly used feature ranking algorithms.

Keywords: Computer science; Active learning (machine learning); Feature vector; Pattern recognition; Machine learning; Kernel method; Computational learning theory; Ranking SVM; Feature (machine learning); Artificial intelligence; Pruning (decision trees); Feature learning
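The abstract's decremental input pruning, i.e. ranking features by iteratively discarding the weakest input, can be sketched with the generic recursive-feature-elimination scheme (this is an assumption for illustration; the paper's specific method is not detailed in the snippet):

```python
# Feature ranking by decremental pruning: repeatedly fit a linear SVM and
# drop the feature with the smallest weight magnitude. scikit-learn's RFE
# implements this generic scheme; it is a stand-in, not the paper's method.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# Linear kernel so |w_i| gives a per-feature importance to prune on.
rfe = RFE(SVC(kernel="linear"), n_features_to_select=5, step=1)
rfe.fit(X, y)

ranking = rfe.ranking_  # 1 = retained; larger values = pruned earlier
print("kept feature indices:", list(rfe.get_support(indices=True)))
```

The resulting ranking is exactly the kind of feature ordering the abstract says is needed for large molecular-descriptor sets.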

Hierarchies of probabilistic and team FIN-learning

2001

Abstract. A FIN-learning machine M receives successive values of the function f it is learning and at some moment outputs a conjecture which should be a correct index of f. FIN learning has two extensions: (1) If M flips fair coins and learns a function with certain probability p, we have FIN〈p〉-learning. (2) When n machines simultaneously try to learn the same function f and at least k of these machines output correct indices of f, we have learning by a [k,n]FIN team. Sometimes a team or a probabilistic learner can simulate another one, if their probabilities p1,p2 (or team success ratios k1/n1,k2/n2) are close enough (Daley et al., in: Valiant, Waranth (Eds.), Proc. 5th Annual Workshop on C…

Keywords: Discrete mathematics; Probabilistic learning; Conjecture; FIN; General Computer Science; Index (typography); Probabilistic logic; Inductive inference; Function (mathematics); Moment (mathematics); Computational learning theory; Team learning; Algorithm; Mathematics
Published in: Theoretical Computer Science

Parsimony hierarchies for inductive inference

2004

Abstract. Freivalds defined an acceptable-programming-system-independent criterion for learning programs for functions in which the final programs were required to be both correct and “nearly” minimal size, i.e., within a computable function of being purely minimal size. Kinber showed that this parsimony requirement on final programs limits learning power. However, in scientific inference, parsimony is considered highly desirable. A lim-computable function is (by definition) one calculable by a total procedure allowed to change its mind finitely many times about its output. Investigated is the possibility of assuaging somewhat the limitation on learning power resulting from requiring parsimonio…

Keywords: Discrete mathematics; Logic; 68Q32; Limiting computable function; Computational learning theory; Function (mathematics); Inductive reasoning; Notation; Minimal size program; Constructive; Philosophy; Computable function; Bounded function; Arithmetic; Ordinal notation; Constructive ordinal notations; Mathematics

Replacing radiative transfer models by surrogate approximations through machine learning

2015

Physically-based radiative transfer models (RTMs) help in understanding the processes occurring on the Earth's surface and their interactions with vegetation and atmosphere. However, advanced RTMs can take a long computational time, which makes them unfeasible in many real applications. To overcome this problem, it has been proposed to substitute RTMs with so-called emulators. Emulators are statistical models that approximate the functioning of RTMs. They are advantageous in real practice because of their computational efficiency and their excellent accuracy and flexibility for extrapolation. We here present an ‘Emulator toolbox’ that enables analyzing three multi-output machine learning regress…

Keywords: Flexibility (engineering); Atmosphere (unit); Computer science; Extrapolation; Statistical model; Vegetation; Machine learning; Atmosphere; Computational learning theory; Radiative transfer; Artificial intelligence
Published in: 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS)
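The emulation idea in the abstract, training a cheap statistical model on a few runs of an expensive physical model, can be sketched as follows. The "RTM" here is a stand-in analytic function and the Gaussian process regressor is one plausible emulator choice, not necessarily one of the toolbox's three methods:

```python
# Emulation sketch: fit a statistical surrogate to a handful of runs of an
# expensive model, then query the surrogate instead of the model. The
# expensive_rtm function is a hypothetical placeholder for a real RTM.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_rtm(x):                       # placeholder, not a real RTM
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 2, size=(40, 1))   # a few "costly" model runs
y_train = expensive_rtm(X_train).ravel()

emulator = GaussianProcessRegressor().fit(X_train, y_train)

# The emulator is now queried densely at negligible cost.
X_test = np.linspace(0, 2, 200).reshape(-1, 1)
err = np.max(np.abs(emulator.predict(X_test) - expensive_rtm(X_test).ravel()))
print("max emulation error:", err)
```

Once trained, every surrogate prediction costs microseconds, which is the computational-efficiency argument the abstract makes for replacing RTM calls in real applications.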

On the impact of forgetting on learning machines

1995

People tend not to have perfect memories when it comes to learning, or to anything else for that matter. Most formal studies of learning, however, assume a perfect memory. Some approaches have restricted the number of items that can be retained. We introduce a complexity-theoretic accounting of memory utilization by learning machines. In our new model, memory is measured in bits as a function of the size of the input. There is a hierarchy of learnability based on increasing memory allotment. The lower bound results are proved using an unusual combination of pumping and mutual recursion theorem arguments. For technical reasons, it was necessary to consider two types of memory: long and sh…

Keywords: Theoretical computer science; Active learning (machine learning); Computer science; Semi-supervised learning; Mutual recursion; Artificial intelligence; Instance-based learning; Hierarchy; Forgetting; Kolmogorov complexity; Learnability; Algorithmic learning theory; Online machine learning; Inductive reasoning; Pumping lemma for regular languages; Term (time); Computational learning theory; Hardware and Architecture; Control and Systems Engineering; Sequence learning; Software; Cognitive psychology; Information Systems
Published in: Journal of the ACM

Query-preserving watermarking of relational databases and XML documents

2011

Watermarking allows robust and unobtrusive insertion of information in a digital document. During the last few years, techniques have been proposed for watermarking relational databases or XML documents, where information insertion must preserve a specific measure on the data (for example, the mean and variance of numerical attributes). In this article we investigate the problem of watermarking databases or XML while preserving a set of parametric queries in a specified language, up to an acceptable distortion. We first show that unrestricted databases cannot be watermarked while preserving trivial parametric queries. We then exhibit query languages and classes of structures that allow guarante…

Keywords: [INFO.INFO-DB] Computer Science [cs]/Databases [cs.DB]; Theoretical computer science; Information retrieval; Relational database; Computer science; Query language; VC dimension; Computational learning theory; Bounded function; Graph (abstract data type); Digital watermarking; XML; Information Systems; Parametric statistics
Published in: Proceedings of the twenty-second ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems
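The tension the abstract studies, embedding marks while keeping a query's answer within an acceptable distortion, can be illustrated with a toy mean-preserving scheme. This is a didactic sketch of the general idea, not the paper's construction:

```python
# Toy query-preserving watermarking: perturb a numeric attribute with
# key-dependent +/-eps marks, then compensate the total so an aggregate
# query (here, the column sum/mean) is preserved exactly. Didactic only.
import random

def watermark(values, key, eps=1):
    rng = random.Random(key)            # key-dependent, hence detectable
    marked = [v + rng.choice((-eps, eps)) for v in values]
    # Compensate the accumulated drift so the mean query is unchanged.
    drift = sum(marked) - sum(values)
    marked[0] -= drift
    return marked

data = [100, 102, 98, 101, 99]
wm = watermark(data, key=42)
print(wm)
```

Each tuple is distorted by at most eps (plus the compensation on one tuple), yet any query computing the column mean returns exactly the original answer, which is the "preserve a specific measure on the data" requirement in miniature.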